Search Results
Search for: All records
Total Resources: 2
Author / Contributor
- Yu, Fisher (2)
- Chen, Nuo (1)
- Darrell, Trevor (1)
- Dunlap, Lisa (1)
- Feng, Chen (1)
- Gong, Moonjun (1)
- Gonzalez, Joseph E. (1)
- Jiang, Tao (1)
- Li, Kenan (1)
- Li, Sihang (1)
- Li, Yiming (1)
- Li, Zhiheng (1)
- Liu, Xinhao (1)
- Ma, Yi-An (1)
- Mirhoseini, Azalia (1)
- Wang, Ruth (1)
- Wang, Xin (1)
- Wang, Yue (1)
- Wang, Zijun (1)
- Yu, Zhiding (1)
Wang, Xin; Yu, Fisher; Dunlap, Lisa; Ma, Yi-An; Wang, Ruth; Mirhoseini, Azalia; Darrell, Trevor; Gonzalez, Joseph E. (Uncertainty in Artificial Intelligence)
Larger networks generally have greater representational power, at the cost of increased computational complexity. Sparsifying such networks has been an active area of research but has generally been limited to static regularization or dynamic approaches using reinforcement learning. We explore a mixture-of-experts (MoE) approach to deep dynamic routing, which activates certain experts in the network on a per-example basis. Our novel DeepMoE architecture increases the representational power of standard convolutional networks by adaptively sparsifying and recalibrating channel-wise features in each convolutional layer. We employ a multi-headed sparse gating network to determine the selection and scaling of channels for each input, leveraging exponential combinations of experts within a single convolutional network. Our proposed architecture is evaluated on four benchmark datasets and tasks, and we show that DeepMoEs are able to achieve higher accuracy with lower computation than standard convolutional networks.
Full Text Available
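As a rough illustration of the per-layer channel gating the abstract describes, here is a minimal hypothetical PyTorch sketch, not the authors' released DeepMoE implementation: the class names SparseChannelGate and GatedConvBlock, the per-example embedding that drives the gate, and the ReLU-based sparsification are assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseChannelGate(nn.Module):
    """One head of a sparse gating network: maps a per-example embedding
    to non-negative per-channel weights for a convolutional layer.
    Hypothetical sketch; the exact gating form is an assumption."""

    def __init__(self, embed_dim: int, num_channels: int):
        super().__init__()
        self.fc = nn.Linear(embed_dim, num_channels)

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        # ReLU drives many gate values to exactly zero, so only a subset
        # of channels ("experts") is active for each input example.
        return F.relu(self.fc(embedding))


class GatedConvBlock(nn.Module):
    """A convolution whose output channels are adaptively sparsified and
    recalibrated by the gating head, on a per-example basis."""

    def __init__(self, in_ch: int, out_ch: int, embed_dim: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.gate = SparseChannelGate(embed_dim, out_ch)

    def forward(self, x: torch.Tensor, embedding: torch.Tensor) -> torch.Tensor:
        gates = self.gate(embedding)            # (batch, out_ch)
        out = self.conv(x)                      # (batch, out_ch, H, W)
        # Broadcast the per-example gates over spatial dimensions.
        return out * gates.unsqueeze(-1).unsqueeze(-1)


if __name__ == "__main__":
    block = GatedConvBlock(in_ch=16, out_ch=32, embed_dim=64)
    x = torch.randn(8, 16, 28, 28)   # a batch of input feature maps
    emb = torch.randn(8, 64)         # per-example embeddings feeding the gate
    y = block(x, emb)
    print(y.shape)                   # torch.Size([8, 32, 28, 28])
```

The ReLU gate is what makes the routing sparse: channels whose gate value is zero contribute nothing for that example, so different inputs effectively activate different subsets of experts within the same convolutional network.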